
14.1  People Question, Computers Follow Programs

Computers often calculate so fast that one is tempted to assume they can think. However, there are numerous differences from living beings. In particular, computers are not alive, so they do not operate in an environment and cannot reproduce. Therefore, meaning in the strict sense (e.g., food, fear, freedom) is also not available to a computer; the computer, on the other hand, can reason formally with logical chains of inference. But for formal systems the following holds: either they are closed, in which case there are statements about them that cannot be proven within them, or they are not clearly delimited. More precisely, this is captured by Gödel's two incompleteness theorems.

The first incompleteness theorem proves that in sufficiently strong, consistent (contradiction-free) systems there are always statements that can neither be proven nor refuted. The second incompleteness theorem shows that sufficiently strong, consistent systems cannot prove their own consistency. For such fundamental statements, then, the computer remains stuck in the undecidable. We, in contrast, can think about our own foundations whenever we like. But it is also clear that humans do not always think and decide without contradictions. This holds more generally: biological systems are not primarily decision-making or computational systems, but living beings that have to survive, above all in their environment. For the same reason, decisions, even fundamental ones (e.g., should a cell divide or not), quickly become fuzzy (sometimes even a bit random). Evolution, however, ensures that this fuzziness is tuned precisely so that we survive as well as possible with the resulting decisions, and so that we also have a sufficiently accurate picture of the environment in which we act as living beings.
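
For readers who want the precise form, here is a compact statement of the two theorems in standard textbook notation (the symbols $T$, $G_T$ and $\operatorname{Con}(T)$ are the usual conventions and do not appear elsewhere in this text): let $T$ be a consistent, recursively axiomatizable formal system that contains elementary arithmetic. Then

$$\text{(1)}\quad \exists\, G_T:\; T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T, \qquad\qquad \text{(2)}\quad T \nvdash \operatorname{Con}(T),$$

where $\operatorname{Con}(T)$ is the arithmetized statement that $T$ is free of contradictions. In other words, such a system can neither decide the sentence $G_T$ nor certify its own consistency from within.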

Basically, this phenomenon is also easy to understand. Formal systems are either closed, in which case they can be driven into a contradiction, or at least into a statement that is undecidable for them, as soon as they have to reason about their own foundations; or they are not closed (in which case additional statements can, if need be, be added formally). People, on the other hand, do like to think about themselves, and they also (usually) manage to get back to everyday work afterwards. It is important to realize that this is a very basic barrier between humans and computers. As long as the computer draws its conclusions correctly and logically, exactly like a formal system, it will not get beyond this “Gödel limit”, i.e. it will never really be able to think about itself. There is no concept of meaning and no real life in a real environment. If one were instead to create artificial life, one could cross this border; but every type of life in nature is equal and has the same right to live, be it a human, an insect or a bacterium, and this would also include any future type of artificial life. However, as we do not even treat all humans equally, we are not ethically mature enough to try to create artificial life. Luckily, the technological hurdles on the way to artificial life are also enormous.

After this consideration of the clear boundaries between computers as formal systems and humans as living, feeling and acting beings, the infobox gives some cornerstones of artificial intelligence. The important thing to take away is that humans should at least be
